When the Cramér-Rao Inequality Provides No Information
Abstract
We investigate a one-parameter family of probability densities (related to the Pareto distribution, which describes many natural phenomena) for which the Cramér-Rao inequality provides no information.

1. Cramér-Rao Inequality

One of the most important problems in statistics is estimating a population parameter from a finite sample. As there are often many different estimators, it is desirable to be able to compare them and to say in what sense one estimator is better than another. One common approach is to take the unbiased estimator with the smaller variance. For example, if $X_1, \dots, X_n$ are independent random variables uniformly distributed on $[0, \theta]$, $Y_n = \max_i X_i$ and $\overline{X} = (X_1 + \cdots + X_n)/n$, then $\frac{n+1}{n} Y_n$ and $2\overline{X}$ are both unbiased estimators of $\theta$, but the former has smaller variance than the latter and therefore provides a tighter estimate. Two natural questions are (1) which estimator has the minimum variance, and (2) what bounds are available on the variance of an unbiased estimator? The first question is very hard to solve in general. Progress towards its solution is given by the Cramér-Rao inequality, which provides a lower bound on the variance of an unbiased estimator (and thus, if we find an estimator that achieves this bound, we can conclude that it is a minimum variance unbiased estimator).

Date: February 5, 2008.
2000 Mathematics Subject Classification. 62B10 (primary), 62F12, 60E05 (secondary).
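As a quick sanity check on the variance comparison above, here is a minimal Monte Carlo sketch (assuming NumPy is available; the values $\theta = 3$ and $n = 10$ are illustrative choices, not from the paper). It simulates both unbiased estimators and compares their empirical variances against the standard closed forms $\mathrm{Var}\big(\tfrac{n+1}{n}Y_n\big) = \tfrac{\theta^2}{n(n+2)}$ and $\mathrm{Var}(2\overline{X}) = \tfrac{\theta^2}{3n}$:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 3.0       # true parameter (illustrative choice)
n = 10            # sample size (illustrative choice)
trials = 200_000  # Monte Carlo replications

# Each row is one sample X_1, ..., X_n drawn uniformly from [0, theta].
X = rng.uniform(0.0, theta, size=(trials, n))

# Estimator 1: the scaled maximum (n+1)/n * Y_n, with Y_n = max_i X_i.
est_max = (n + 1) / n * X.max(axis=1)

# Estimator 2: twice the sample mean, 2 * Xbar.
est_mean = 2.0 * X.mean(axis=1)

# Both empirical means should be close to theta (unbiasedness), and the
# empirical variances should match the closed forms quoted above.
print("scaled max:  mean %.4f  var %.5f  (theory %.5f)"
      % (est_max.mean(), est_max.var(), theta**2 / (n * (n + 2))))
print("2 * mean  :  mean %.4f  var %.5f  (theory %.5f)"
      % (est_mean.mean(), est_mean.var(), theta**2 / (3 * n)))
```

Since $\theta^2/(n(n+2)) < \theta^2/(3n)$ for every $n > 1$, the scaled maximum is the tighter of the two estimators; the Cramér-Rao inequality then asks how small the variance of any unbiased estimator can possibly be.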
Similar resources
Cramér-Rao and moment-entropy inequalities for Renyi entropy and generalized Fisher information
The moment-entropy inequality shows that a continuous random variable with given second moment and maximal Shannon entropy must be Gaussian. Stam's inequality shows that a continuous random variable with given Fisher information and minimal Shannon entropy must also be Gaussian. The Cramér-Rao inequality is a direct consequence of these two inequalities. In this paper the inequalities above are ...
Generalization of Cramér-Rao and Bhattacharyya inequalities for the weighted covariance matrix
The paper considers a family of probability distributions depending on a parameter. The goal is to derive the generalized versions of Cramér-Rao and Bhattacharyya inequalities for the weighted covariance matrix and of the Kullback inequality for the weighted Kullback distance, which are important objects themselves [9, 23, 28]. The asymptotic forms of these inequalities for a particular family ...
On multidimensional generalized Cramér-Rao inequalities, uncertainty relations and characterizations of generalized q-Gaussian distributions
In the present work, we show how the generalized Cramér-Rao inequality for the estimation of a parameter, presented in a recent paper, can be extended to the multidimensional case with general norms on $\mathbb{R}^n$, and to a wider context. As a particular case, we obtain a new multidimensional Cramér-Rao inequality which is saturated by generalized q-Gaussian distributions. We also give another related Cr...
Improved Cramer-Rao Inequality for Randomly Censored Data
As an application of the improved Cauchy-Schwarz inequality due to Walker (Statist. Probab. Lett. (2017) 122:86-90), we obtain an improved version of the Cramér-Rao inequality for randomly censored data derived by Abdushukurov and Kim (J. Soviet Math. (1987) pp. 2171-2185). We derive a lower bound of Bhattacharyya type for the mean square error of a parametric function based on randomly censor...
Notes on the Cramér-Rao Inequality
Suppose $X$ is a random variable with pdf $f_X(x;\theta)$, $\theta$ being an unknown parameter. Let $X_1, \dots, X_n$ be a random sample and $\hat{\theta} = \hat{\theta}(X_1, \dots, X_n)$. We've seen that $E(\hat{\theta})$, or rather $E(\hat{\theta}) - \theta$, is a measure of how biased $\hat{\theta}$ is. We've also seen that $\mathrm{Var}(\hat{\theta})$ provides a measure of efficiency, i.e., the smaller the variance of $\hat{\theta}$, the more likely $E(\hat{\theta})$ is to provide an accurate estimate of $\theta$. Given a specific ...